Assignment 3

Due date: Monday 10/18, end of day

This assignment will contain two parts:

  1. Exploring evictions and code violations in Philadelphia
  2. Exploring the NDVI in Philadelphia

Part 1: Exploring Evictions and Code Violations in Philadelphia

In this assignment, we'll explore spatial trends in evictions in Philadelphia using data from the Eviction Lab, and in building code violations using data from OpenDataPhilly.

We'll be exploring the idea that evictions can occur as retaliation against renters for reporting code violations. Spatial correlations between evictions and code violations from the City's Licenses and Inspections department can offer some insight into this question.

A couple of interesting background readings:

1.1 Explore Eviction Lab Data

The Eviction Lab built the first national database for evictions. If you aren't familiar with the project, you can explore their website: https://evictionlab.org/

1.1.1 Read data using geopandas

The first step is to read the eviction data by census tract using geopandas. The data for all of Pennsylvania by census tract is available in the data/ folder in a GeoJSON format.

Load the data file "PA-tracts.geojson" using geopandas

Note: If you'd like to see all columns in the data frame, you can increase the max number of columns using pandas display options:

In [1]:
import matplotlib
import pandas as pd
import geopandas as gpd
import hvplot.pandas
import geoviews as gv
import geoviews.tile_sources as gvts
import holoviews as hv
import numpy as np
import rasterio as rio
import contextily as cx
hv.extension('bokeh', 'matplotlib')
In [2]:
from matplotlib import pyplot as plt
from holoviews import opts
from rasterio.mask import mask
from rasterstats import zonal_stats
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
In [3]:
pd.options.display.max_columns = 999
pd.options.display.max_rows = 999
In [4]:
evPA = gpd.read_file("./data/PA-tracts.geojson")

1.1.2 Explore and trim the data

We will need to trim the data to Philadelphia only. Take a look at the data dictionary for descriptions of the various columns, located in the top-level repository folder: eviction_lab_data_dictionary.txt

Note: the column names are shortened — see the end of the above file for the abbreviations. The numbers at the end of the columns indicate the years. For example, e-16 is the number of evictions in 2016.

Take a look at the individual columns and trim to census tracts in Philadelphia. (Hint: Philadelphia is both a city and a county).

In [5]:
evphilly = evPA.loc[(evPA['pl']=="Philadelphia County, Pennsylvania")]

1.1.3 Transform from wide to tidy format

For this assignment, we are interested in the number of evictions by census tract for various years. Right now, each year has its own column, so it will be easiest to transform to a tidy format.

Use the pd.melt() function to transform the eviction data into tidy format, using the number of evictions from 2003 to 2016.

The tidy data frame should have four columns: GEOID, geometry, a column holding the number of evictions, and a column telling you what the name of the original column was for that value.

Hints:

  • You'll want to specify the GEOID and geometry columns as the id_vars. This will keep track of the census tract information.
  • You should specify the names of the columns holding the number of evictions as the value_vars.
  • You can generate a list of these column names using Python's string formatting (https://docs.python.org/3.7/library/string.html#format-examples):
    value_vars = ['e-{:02d}'.format(x) for x in range(3, 17)]
    

Trim to specific columns:

In [6]:
needcolumns = ['GEOID','geometry']
value_vars = ['e-{:02d}'.format(x) for x in range(3, 17)]
needcolumns.extend(value_vars)

Data Melting:

In [7]:
evphilly = evphilly[needcolumns]
value_vars = ['e-{:02d}'.format(x) for x in range(3, 17)]
mlt_evphilly = evphilly.melt(
    id_vars=['GEOID','geometry'],
    value_vars=value_vars,
    var_name='year', 
    value_name='evictions')

1.1.4 Plot the total number of evictions per year from 2003 to 2016

Use hvplot to plot the total number of evictions from 2003 to 2016. You will first need to perform a group by operation and sum up the total number of evictions for all census tracts, and then use hvplot() to make your plot.

You can use any type of hvplot chart you'd like to show the trend in number of evictions over time.

In [8]:
group1_evphilly = mlt_evphilly
group1_evphilly = group1_evphilly.groupby('year',as_index=True)['evictions'].sum()
In [9]:
bar1=group1_evphilly.hvplot(
    kind='bar',
    color='#FE5C5A',
    hover_color = '#FFBA48',
    line_color = '#FE5C5A',
    rot=90, 
    width=800,
    height=300,
    title='The total number of evictions per year from 2003 to 2016',
    xlabel='Year',
    ylabel ='Total Number',
    ylim=(0, 16000),
    attr_labels=True)

dot1=group1_evphilly.hvplot(
    kind='scatter',
    color='#31FFCE',
    size = 50,
    hover_color = '#FFBA48',
    rot=90, 
    width=800,
    height=300,
    attr_labels=True)

line1=group1_evphilly.hvplot(
    kind='line', 
    color='#27CCA4',
    width=800,
    height=300)

final_chart = bar1 * line1 * dot1
final_chart.opts(bgcolor='#2C2C5C')
Out[9]:

1.1.5 The number of evictions across Philadelphia

Our tidy data frame is still a GeoDataFrame with a geometry column, so we can visualize the number of evictions for all census tracts.

Use hvplot() to generate a choropleth showing the number of evictions for a specified year, with a widget dropdown to select a given year (or variable name, e.g., e-16, e-15, etc).

Hints

  • You'll need to use the groupby keyword to tell hvplot to make a series of maps, with a widget to select between them.
  • You will need to specify dynamic=False as a keyword argument to the hvplot() function.
  • Be sure to specify a width and height that makes your output map (roughly) square to limit distortions
In [10]:
mlt_evphilly
Out[10]:
GEOID geometry year evictions
0 42101000100 MULTIPOLYGON (((-75.14161 39.95549, -75.14163 ... e-03 21.0
1 42101000200 MULTIPOLYGON (((-75.15122 39.95686, -75.15167 ... e-03 3.0
2 42101000300 MULTIPOLYGON (((-75.16234 39.95782, -75.16237 ... e-03 17.0
3 42101000801 MULTIPOLYGON (((-75.17732 39.95096, -75.17784 ... e-03 13.0
4 42101000804 MULTIPOLYGON (((-75.17118 39.94778, -75.17102 ... e-03 21.0
... ... ... ... ...
5371 42101017800 MULTIPOLYGON (((-75.11339 39.99649, -75.11137 ... e-16 104.0
5372 42101017900 MULTIPOLYGON (((-75.10591 39.98804, -75.10836 ... e-16 80.0
5373 42101018002 MULTIPOLYGON (((-75.10506 39.98707, -75.10437 ... e-16 32.0
5374 42101018300 MULTIPOLYGON (((-75.06581 39.98629, -75.06859 ... e-16 7.0
5375 42101018400 MULTIPOLYGON (((-75.05902 39.99251, -75.05954 ... e-16 2.0

5376 rows × 4 columns

In [11]:
group_evphilly = mlt_evphilly
geo_evphilly = mlt_evphilly.loc[(mlt_evphilly['year']=="e-03")]
geo_evphilly = geo_evphilly.drop(['year','evictions'],axis=1)
group_evphilly = group_evphilly.groupby(['year', 'GEOID'], as_index=False)['evictions'].sum()
group_evphilly = group_evphilly.merge(geo_evphilly, on='GEOID')
In [12]:
hv.output(widget_location='bottom')
choro = group_evphilly.hvplot(c='evictions', 
                     frame_width=600, 
                     frame_height=600, 
                     groupby = 'year',
                     alpha=0.65,
                     geo=True, 
                     cmap='viridis', 
                     hover_cols=['GEOID'],
                     dynamic=False,
                     )
total = gvts.Wikipedia * choro
total.opts(
    opts.WMTS(width=100, height=100, xaxis=None, yaxis=None),
    opts.Overlay(title="The number of evictions across Philadelphia"))
Out[12]:

1.2 Code Violations in Philadelphia

Next, we'll explore data for code violations from the Licenses and Inspections Department of Philadelphia to look for potential correlations with the number of evictions.

1.2.1 Load data from 2012 to 2016

L+I violation data for the years 2012 through 2016 (inclusive) is provided in CSV format in the "data/" folder.

Load the data using pandas and convert to a GeoDataFrame.

In [13]:
LViolation = pd.read_csv("./data/li_violations.csv")
LViolation['geometry'] = gpd.points_from_xy(LViolation['lng'], LViolation['lat'])
LViolation = gpd.GeoDataFrame(LViolation, geometry='geometry', crs="EPSG:4326")

1.2.2 Trim to specific violation types

There are many different types of code violations (running the unique() function on the violationdescription column will list all of the distinct values, and nunique() gives their count). More information on the different types of violations can be found on the City's website.
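
For example, to inspect the distinct violation types before picking a subset (a quick exploratory check, not one of the graded steps):

LViolation['violationdescription'].unique()    # array of the distinct descriptions
LViolation['violationdescription'].nunique()   # how many distinct descriptions there are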

Below, I've selected 15 types of violations that deal with property maintenance and licensing issues. We'll focus on these violations. The goal is to see if these kinds of violations are correlated spatially with the number of evictions in a given area.

Use the list of violations given to trim your data set to only include these types.

In [14]:
violation_types = [
    "INT-PLMBG MAINT FIXTURES-RES",
    "INT S-CEILING REPAIR/MAINT SAN",
    "PLUMBING SYSTEMS-GENERAL",
    "CO DETECTOR NEEDED",
    "INTERIOR SURFACES",
    "EXT S-ROOF REPAIR",
    "ELEC-RECEPTABLE DEFECTIVE-RES",
    "INT S-FLOOR REPAIR",
    "DRAINAGE-MAIN DRAIN REPAIR-RES",
    "DRAINAGE-DOWNSPOUT REPR/REPLC",
    "LIGHT FIXTURE DEFECTIVE-RES",
    "LICENSE-RES SFD/2FD",
    "ELECTRICAL -HAZARD",
    "VACANT PROPERTIES-GENERAL",
    "INT-PLMBG FIXTURES-RES",
]
In [15]:
slc_LViolation = LViolation.loc[LViolation['violationdescription'].isin(violation_types)]

1.2.3 Make a hex bin map

The code violation data is point data. We can get a quick look at the geographic distribution using matplotlib and the hexbin() function. Make a hex bin map of the code violations and overlay the census tract outlines.

Hints:

  • The eviction data from part 1 was by census tract, so the census tract geometries are available as part of that GeoDataFrame. You can use it to overlay the census tracts on your hex bin map.
  • Make sure you convert your GeoDataFrame to a CRS that's better for visualization than plain old 4326.
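
For reference, here is a minimal sketch of the matplotlib approach the prompt describes (the solution cell below uses hvplot's hexbin instead). It assumes the trimmed violations (slc_LViolation) and the tract geometries (geo_evphilly) defined earlier, reprojected to Web Mercator for display:

# reproject both layers to Web Mercator for a less distorted display
violations_3857 = slc_LViolation.to_crs(epsg=3857)
tracts_3857 = geo_evphilly.to_crs(epsg=3857)

fig, ax = plt.subplots(figsize=(10, 10))
hb = ax.hexbin(
    violations_3857.geometry.x,      # point coordinates in meters (EPSG:3857)
    violations_3857.geometry.y,
    gridsize=40,
    mincnt=1,                        # leave empty hexes blank
    cmap='viridis',
)
tracts_3857.boundary.plot(ax=ax, color='black', linewidth=0.5)  # census tract outlines
plt.colorbar(hb, ax=ax, label='Number of violations')
ax.set_axis_off()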
In [43]:
slc_LViolation
Out[43]:
lat lng violationdescription geometry
2 40.050593 -75.126578 LICENSE-RES SFD/2FD POINT (-75.12658 40.05059)
25 40.022406 -75.121872 EXT S-ROOF REPAIR POINT (-75.12187 40.02241)
30 40.023237 -75.121726 CO DETECTOR NEEDED POINT (-75.12173 40.02324)
31 40.023397 -75.122241 INT S-CEILING REPAIR/MAINT SAN POINT (-75.12224 40.02340)
34 40.023773 -75.121603 INT S-FLOOR REPAIR POINT (-75.12160 40.02377)
... ... ... ... ...
433982 39.962287 -75.226644 CO DETECTOR NEEDED POINT (-75.22664 39.96229)
433985 39.968669 -75.212576 CO DETECTOR NEEDED POINT (-75.21258 39.96867)
434013 39.950209 -75.227244 INT S-CEILING REPAIR/MAINT SAN POINT (-75.22724 39.95021)
434043 39.936179 -75.192078 INT S-FLOOR REPAIR POINT (-75.19208 39.93618)
434046 40.012805 -75.155963 ELECTRICAL -HAZARD POINT (-75.15596 40.01281)

34108 rows × 4 columns

In [16]:
hv.output(widget_location='bottom')

hex1 = slc_LViolation.hvplot.hexbin(
                      frame_width=600, 
                      frame_height=600, 
                      x='lng', 
                      y='lat', 
                      groupby = "violationdescription", 
                      logz=True, 
                      geo=True, 
                      gridsize=40, 
                      cmap='viridis',
                      dynamic=False)

boundary1 = geo_evphilly.hvplot.polygons(
                      geo=True,
                      alpha=0,
                      line_alpha=0.5,
                      line_width =1,
                      line_color="black",
                      hover=False,
                      width=600,
                      height=600,)

combination = gvts.Wikipedia*hex1 * boundary1
combination.opts(
    opts.WMTS(width=100, height=100, xaxis=None, yaxis=None),
    opts.Overlay(title="Geographic distribution of code violations"))
Out[16]:

1.2.4 Spatially join data sets

To do a census tract comparison to our eviction data, we need to find which census tract each of the code violations falls into. Use the geopandas.sjoin() function to do just that.

Hints

  • You can re-use your eviction data frame, but you will only need the geometry column (specifying census tract polygons) and the GEOID column (specifying the name of each census tract).
  • Make sure both data frames have the same CRS before joining them together!
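
A minimal sketch of the join step, assuming both layers are in EPSG:4326 (the solution cell In [17] below uses how='right' so that every tract is kept, even those without violations):

tracts = geo_evphilly[['GEOID', 'geometry']]          # tract id + polygon only
violations_with_tracts = gpd.sjoin(
    slc_LViolation.to_crs(tracts.crs),                # make sure the CRSs match
    tracts,
    how='left',                                       # keep every violation point
)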

1.2.5 Calculate the number of violations by type per census tract

Next, we'll want to find the number of violations (for each kind) per census tract. You should group the data frame by violation type and census tract name.

The result of this step should be a data frame with three columns: violationdescription, GEOID, and N, where N is the number of violations of that kind in the specified census tract.

Optional: to make prettier plots

Some census tracts won't have any violations, and they won't be included when we do the above calculation. However, there is a trick to set the values for those census tracts to be zero. After you calculate the sizes of each violation/census tract group, you can run:

N = N.unstack(fill_value=0).stack().reset_index(name='N')

where N gives the total size of each of the groups, specified by violation type and census tract name.

See this StackOverflow post for more details.

This part is optional, but will make the resulting maps a bit prettier.

In [17]:
tracts_slc_LViolation = slc_LViolation
tracts_slc_LViolation = gpd.sjoin(tracts_slc_LViolation, geo_evphilly, how='right')
In [18]:
numtract = tracts_slc_LViolation
numtract["N"] = 1
numtract = numtract.groupby(['GEOID','violationdescription'])['N'].sum()
numtract = numtract.unstack(fill_value=0).stack().reset_index(name='N')
numtract.head()
Out[18]:
GEOID violationdescription N
0 42101000100 CO DETECTOR NEEDED 0
1 42101000100 DRAINAGE-DOWNSPOUT REPR/REPLC 6
2 42101000100 DRAINAGE-MAIN DRAIN REPAIR-RES 0
3 42101000100 ELEC-RECEPTABLE DEFECTIVE-RES 0
4 42101000100 ELECTRICAL -HAZARD 1

1.2.6 Merge with census tracts geometries

We now have the number of violations of different types per census tract specified as a regular DataFrame. You can now merge it with the census tract geometries (from your eviction data GeoDataFrame) to create a GeoDataFrame.

Hints

  • Use pandas.merge() and specify the on keyword to be the column holding census tract names.
  • Make sure the result of the merge operation is a GeoDataFrame — you will want the GeoDataFrame holding census tract geometries to be the first argument of the pandas.merge() function.
In [19]:
numtract = geo_evphilly.merge(numtract, on='GEOID')

1.2.7 Interactive choropleths for each violation type

Now, we can use hvplot() to create an interactive choropleth for each violation type and add a widget to specify different violation types.

Hints

  • You'll need to use the groupby keyword to tell hvplot to make a series of maps, with a widget to select different violation types.
  • You will need to specify dynamic=False as a keyword argument to the hvplot() function.
  • Be sure to specify a width and height that makes your output map (roughly) square to limit distortions
In [20]:
hv.output(widget_location='bottom')

pol1 = numtract.hvplot.polygons(c='N',
                      frame_width=600, 
                      frame_height=600,  
                      groupby = "violationdescription", 
                      geo=True, 
                      cmap='viridis',
                      alpha=0.7,
                      hover_cols='all',
                      dynamic=False)

combination2 = gvts.Wikipedia * pol1
combination2.opts(
    opts.WMTS(width=600, height=600, xaxis=None, yaxis=None),
    opts.Overlay(title="Geographic distribution of code violations"))
Out[20]:

1.3. A side-by-side comparison

From the interactive maps of evictions and violations, you should notice a lot of spatial overlap.

As a final step, we'll make a side-by-side comparison to better show the spatial correlations. This will involve a few steps:

  1. Trim the data frame plotted in section 1.1.5 to only include evictions from 2016.
  2. Trim the data frame plotted in section 1.2.7 to only include a single violation type (pick whichever one you want!).
  3. Use hvplot() to make two interactive choropleth maps, one for the data from step 1. and one for the data in step 2.
  4. Show these two plots side by side (one row and 2 columns) using the syntax for combining charts.

Note: since we selected a single year and violation type, you won't need to use the groupby= keyword here.

In [21]:
evphilly_16 = group_evphilly.loc[(group_evphilly['year']=='e-16')]
specific_numtract = numtract.loc[(numtract['violationdescription']=='VACANT PROPERTIES-GENERAL')]

choro2 = evphilly_16.hvplot(c='evictions', 
                     frame_width=400, 
                     frame_height=600, 
                     alpha=0.7,
                     geo=True, 
                     cmap='viridis', 
                     hover_cols='all',
                     dynamic=False,
                     title="Geo-distribution of eviction in 2016",
                     fontsize=9
                     )

pol2 = specific_numtract.hvplot(c='N',
                      frame_width=400, 
                      frame_height=600,  
                      geo=True, 
                      cmap='Plasma',
                      alpha=0.7,
                      hover_cols='all',
                      dynamic=False,
                      title="Geo-distribution of the violation - VACANT PROPERTIES-GENERAL",
                      fontsize=9
                      )

combination3 = (gvts.Wikipedia * choro2 + gvts.Wikipedia * pol2).cols(2)
combination3
Out[21]:

1.4. Extra Credit

Identify the 20 most common types of violations within the time period of 2012 to 2016 and create a set of interactive choropleths similar to what was done in section 1.2.7.

Use this set of maps to identify 3 types of violations that don't seem to have much spatial overlap with the number of evictions in the City.

In [22]:
top20_violation = LViolation
top20_violation['N'] = 1  # note: this also adds the 'N' column to LViolation itself, which In [23] relies on
top20_violation = top20_violation.groupby('violationdescription', as_index=False)['N'].sum().sort_values('N', ascending=False)
top20_violation = top20_violation.iloc[0:20]
top20_violation = top20_violation.reset_index(drop=True)
top20 = top20_violation['violationdescription']
In [23]:
top_2_LViolation = LViolation.loc[LViolation['violationdescription'].isin(top20)]
top_2_LViolation = gpd.sjoin(top_2_LViolation, geo_evphilly, how='right')
top20_numtract = top_2_LViolation
top20_numtract = top20_numtract.groupby(['GEOID','violationdescription'])['N'].sum()
top20_numtract = top20_numtract.unstack(fill_value=0).stack().reset_index(name='N')
top20_numtract = geo_evphilly.merge(top20_numtract, on='GEOID')
In [24]:
total_evphilly = mlt_evphilly
total_evphilly = total_evphilly.groupby('GEOID', as_index=False)['evictions'].sum()
total_evphilly = total_evphilly.merge(geo_evphilly,on='GEOID')
In [25]:
points_evphilly = gpd.GeoDataFrame(total_evphilly, geometry='geometry', crs=geo_evphilly.crs)
# compute centroids in a projected CRS (to avoid the geographic-CRS warning), then convert back
points_evphilly['geometry'] = points_evphilly.geometry.to_crs(epsg=3857).centroid.to_crs(geo_evphilly.crs)
points_evphilly['eviction2'] = points_evphilly['evictions'] / 3
In [26]:
hv.output(widget_location='bottom')

pol2 = top20_numtract.hvplot.polygons(c='N',
                      frame_width=600, 
                      frame_height=600,  
                      groupby = "violationdescription", 
                      geo=True, 
                      cmap='viridis',
                      alpha=1,
                      hover_cols='all',
                      dynamic=False)


point3 = points_evphilly.hvplot(
                      size='eviction2',
                      line_color = '#38CBE7',
                      frame_width=600, 
                      frame_height=600,  
                      geo=True, 
                      c='#38CBE7',
                      alpha=0.4,
                      dynamic=False)

combination3 = gvts.Wikipedia * pol2 * point3
combination3.opts(
    opts.WMTS(width=600, height=600, xaxis=None, yaxis=None),
    opts.Overlay(title="Geographic distribution of Top20 code violation"))
Out[26]:

Conclusion

  • 'EXTA-VACANT LOT CLEAN/MAINTAI'
  • 'LICENSE - RENTAL PROPERTY'
  • 'RUBBISH/GARBAGE EXTERIOR-OWNER'

are three types of violations that don't seem to have much spatial overlap with the number of evictions in the City.

Part 2: Exploring the NDVI in Philadelphia

In this part, we'll explore the NDVI in Philadelphia a bit more. It includes two steps:

  1. We'll compare the median NDVI within the city limits and the immediate suburbs
  2. We'll calculate the NDVI around street trees in the city.

2.1 Comparing the NDVI in the city and the suburbs

2.1.1 Load Landsat data for Philadelphia

Use rasterio to load the Landsat data for Philadelphia (available in the "data/" folder).

In [27]:
landsat = rio.open("./data/landsat8_philly.tif")

2.1.2 Separating the city from the suburbs

Create two polygon objects, one for the city limits and one for the suburbs. To calculate the suburbs polygon, we will take everything outside the city limits but still within the bounding box.

  • The city limits are available in the "data/" folder.
  • To calculate the suburbs polygon, the "envelope" attribute of the city limits geometry will be useful.
  • You can use geopandas' geometric manipulation functionality to calculate the suburbs polygon from the city limits polygon and the envelope polygon.
In [28]:
c_limit = gpd.read_file("./data/City_Limits.geojson")
c_limit = c_limit.to_crs(landsat.crs)
In [29]:
env = c_limit.envelope
envgdf = gpd.GeoDataFrame(geometry=env)  # the envelope inherits the city limits' CRS, already matched to the Landsat raster above
In [30]:
sub_limit = gpd.overlay(envgdf,c_limit, how='difference')

2.1.3 Mask and calculate the NDVI for the city and the suburbs

Using the two polygons from the last section, use rasterio's mask functionality to create two masked arrays from the landsat data, one for the city and one for the suburbs.

For each masked array, calculate the NDVI.

In [31]:
city, mask_transform = mask(
    dataset=landsat,
    shapes=c_limit.geometry,
    crop=True,  
    all_touched=True,  
    filled=False, 
)

suburb, mask_transform = mask(
    dataset=landsat,
    shapes=sub_limit.geometry,
    crop=True,  
    all_touched=True,  
    filled=False, 
)
In [32]:
c_red = city[3]
c_nir = city[4]
sub_red = suburb[3]
sub_nir = suburb[4]
In [33]:
def calculate_NDVI(nir, red):
    """
    Calculate the NDVI from the NIR and red landsat bands
    """
    nir = nir.astype(float)
    red = red.astype(float)
    
    check = np.logical_and( red.mask == False, nir.mask == False )
    
    ndvi = np.where(check,  (nir - red ) / ( nir + red ), np.nan )
    return ndvi 
In [34]:
city_NDVI = calculate_NDVI(c_nir, c_red)
sub_NDVI = calculate_NDVI(sub_nir, sub_red)
In [35]:
fig, ax = plt.subplots(figsize=(10,10))
landsat_extent = [
    landsat.bounds.left,
    landsat.bounds.right,
    landsat.bounds.bottom,
    landsat.bounds.top,
]
img = ax.imshow(city_NDVI,extent=landsat_extent)
c_limit.boundary.plot(ax=ax, edgecolor='gray', facecolor='none', linewidth=4)

plt.colorbar(img)
ax.set_axis_off()
ax.set_title("NDVI in City of Philadelphia", fontsize=18);

fig, ax = plt.subplots(figsize=(10,10))
landsat_extent = [
    landsat.bounds.left,
    landsat.bounds.right,
    landsat.bounds.bottom,
    landsat.bounds.top,
]
img = ax.imshow(sub_NDVI, extent=landsat_extent)
sub_limit.boundary.plot(ax=ax, edgecolor='gray', facecolor='none', linewidth=4)

plt.colorbar(img)
ax.set_axis_off()
ax.set_title("NDVI in Suburb of Philadelphia", fontsize=18)

2.1.4 Calculate the median NDVI within the city and within the suburbs

  • Calculate the median value from your NDVI arrays for the city and suburbs
  • Numpy's nanmedian function will be useful for ignoring NaN elements
  • Print out the median values. Which has a higher NDVI: the city or suburbs?
In [36]:
med_sub = np.nanmedian(sub_NDVI)
med_city = np.nanmedian(city_NDVI)
print("median NDVI within the city:",med_city,"   median NDVI within the suburbs:",med_sub)
median NDVI within the city: 0.20268593532493442    median NDVI within the suburbs: 0.3746654463028859

Conclusion: the suburbs have a higher NDVI than the city.

An alternative check of the same medians using rasterstats' zonal_stats:

c_stats = zonal_stats(c_limit, city_NDVI, affine=landsat.transform, stats=['median'])
sub_stats = zonal_stats(sub_limit, sub_NDVI, affine=landsat.transform, stats=['median'])
print(c_stats, sub_stats)

2.2 Calculating the NDVI for Philadelphia's street trees

2.2.1 Load the street tree data

The data is available in the "data/" folder. It has been downloaded from OpenDataPhilly and contains the locations of about 2,500 street trees in Philadelphia.

In [37]:
st_trees = gpd.read_file("./data/ppr_tree_canopy_points_2015.geojson")
st_trees = st_trees.to_crs(landsat.crs)

2.2.2 Calculate the NDVI values at the locations of the street trees

  • Use the rasterstats package to calculate the NDVI values at the locations of the street trees.
  • Since these are point geometries, you can calculate either the median or the mean statistic (only one pixel will contain each point).
In [38]:
tree_NDVI = zonal_stats(
    st_trees, 
    city_NDVI, 
    affine=landsat.transform, 
    stats=['mean'])
D:\Miniconda_Python\envs\musa-550-fall-2021\lib\site-packages\rasterstats\io.py:302: UserWarning: Setting nodata to -999; specify nodata explicitly
  warnings.warn("Setting nodata to -999; specify nodata explicitly")
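
If you want to avoid that warning, zonal_stats accepts an explicit nodata value; since our NDVI array marks missing pixels with NaN, passing it explicitly should work (a hedged suggestion, not required for the assignment):

tree_NDVI = zonal_stats(
    st_trees,
    city_NDVI,
    affine=landsat.transform,
    stats=['mean'],
    nodata=np.nan,   # tell rasterstats which value marks missing pixels
)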
In [39]:
st_trees['NDVI'] = [s['mean'] for s in tree_NDVI] 
st_trees.head()
Out[39]:
objectid fcode geometry NDVI
0 1 3000 POINT (499541.269 4434698.265) 0.235337
1 2 3000 POINT (488932.471 4424093.158) 0.261535
2 3 3000 POINT (489039.214 4425985.827) 0.096769
3 4 3000 POINT (488993.171 4426088.005) 0.076630
4 5 3000 POINT (488943.113 4424599.478) 0.267952

2.2.3 Plotting the results

Make two plots of the results:

  1. A histogram of the NDVI values, using matplotlib's hist function. Include a vertical line that marks the NDVI = 0 threshold
  2. A plot of the street tree points, colored by the NDVI value, using geopandas' plot function. Include the city limits boundary on your plot.

The figures should be clear and well-styled, with, for example, axis labels, legends, and clear color choices.

In [40]:
fig, ax = plt.subplots(figsize=(8,6))

plt.grid(color='#A8A8A8', lw=0.5,linestyle='dashed')

ax.hist(st_trees['NDVI'], bins=50,color='#47614A')

ax.axvline(x=0, c='k', lw=2,linestyle='dashed')
ax.set_xlabel("Tree NDVI", fontsize=18)
ax.set_ylabel("Number of Trees", fontsize=18);
ax.set_title("Tree NDVI in Philadelphia", fontsize=18);
In [41]:
geo_philly = geo_evphilly
geo_philly = geo_philly.to_crs(c_limit.crs)
In [42]:
fig, ax = plt.subplots(figsize=(10,10))

divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)

c_limit.plot(ax=ax, edgecolor='black', facecolor='none', linewidth=4)

geo_philly.plot(ax=ax, edgecolor='black', facecolor='none', linewidth=0.1)

st_trees.plot(
    column='NDVI', 
    legend=True, 
    ax=ax,
    cax=cax,
    cmap='OrRd',
    s = 55,
    alpha = 0.5)

cx.add_basemap(ax,
               zoom=12, 
               crs=st_trees.crs)

plt.title("Brooklyn",fontsize=10)
ax.set_title("Tree NDVI in Philadelphia EPSG:32618", fontsize=18);